24 research outputs found

    The Role of Vision on Spatial Competence

    Several pieces of evidence indicate that visual experience during development is fundamental to acquiring long-term spatial capabilities. For instance, reaching abilities tend to emerge at around 5 months of age in sighted infants but only at around 10 months in blind infants. Moreover, other spatial skills, such as auditory localization and haptic orientation discrimination, tend to be delayed or impaired in visually impaired children, with a profound impact on the development of sighted-like perceptual and cognitive abilities. Here, we report an overview of studies showing that the lack of vision can interfere with the development of coherent multisensory spatial representations, and we highlight the contribution of current research to designing new tools that support the acquisition of spatial capabilities during childhood.

    Allocentric spatial perception through vision and touch in sighted and blind children.

    Vision and touch play a critical role in spatial development, facilitating the acquisition of allocentric and egocentric frames of reference, respectively. Previous work has shown that children's ability to adopt an allocentric frame of reference might be impaired by the absence of visual experience during growth. In the current work, we investigated whether visual deprivation also impairs the ability to shift from egocentric to allocentric frames of reference in a switching-perspective task performed in the visual and haptic domains. Children with and without visual impairments from 6 to 13 years of age were asked to visually (only sighted children) or haptically (blindfolded sighted children and blind children) explore and reproduce a spatial configuration of coins by assuming either an egocentric or an allocentric perspective. Results indicated that temporary visual deprivation impaired the ability of blindfolded sighted children to switch from an egocentric to an allocentric perspective more in the haptic domain than in the visual domain. Moreover, results on visually impaired children indicated that blindness did not impair allocentric spatial coding in the haptic domain but rather affected the ability to rely on haptic egocentric cues in the switching-perspective task. Finally, our findings suggest that the total absence of vision might impair the development of an egocentric perspective for targets crossing the body midline.

    Clinical assessment of the TechArm system on visually impaired and blind children during uni- and multi-sensory perception tasks

    We developed the TechArm system as a novel technological tool intended for visual rehabilitation settings. The system is designed to provide a quantitative assessment of the developmental stage of perceptual and functional skills that are normally vision-dependent, and to be integrated into customized training protocols. Indeed, the system can provide uni- and multisensory stimulation, allowing visually impaired people to train their capability to correctly interpret non-visual cues from the environment. Importantly, the TechArm is suitable for use by very young children, when the rehabilitative potential is maximal. In the present work, we validated the TechArm system on a pediatric population of low-vision, blind, and sighted children. In particular, four TechArm units were used to deliver unisensory (audio or tactile) or multisensory (audio-tactile) stimulation on the participant's arm, and the participant was asked to report the number of active units. Results showed no significant difference among groups (normal or impaired vision). Overall, we observed the best performance in the tactile condition, while auditory accuracy was around chance level. We also found that the audio-tactile condition yielded better performance than the audio condition alone, suggesting that multisensory stimulation is beneficial when perceptual accuracy and precision are low. Interestingly, we observed that for low-vision children the accuracy in the audio condition improved in proportion to the severity of the visual impairment. Our findings confirm the TechArm system's effectiveness in assessing perceptual competencies in sighted and visually impaired children, and its potential for developing personalized rehabilitation programs for people with visual and sensory impairments.

    Recognizing the same face in different contexts: Testing within-person face recognition in typical development and in autism

    Unfamiliar face recognition follows a particularly protracted developmental trajectory and is more likely to be atypical in children with autism than in those without autism. There is a paucity of research, however, examining the ability to recognize the same face across multiple naturally varying images. Here, we investigated within-person face recognition in children with and without autism. In Experiment 1, typically developing 6- and 7-year-olds, 8- and 9-year-olds, 10- and 11-year-olds, 12- to 14-year-olds, and adults were given 40 grayscale photographs of two distinct male identities (20 of each face, taken at different ages, from different angles, and in different lighting conditions) and were asked to sort them by identity. Children mistook images of the same person for images of different people, subdividing each individual into many perceived identities. Younger children divided images into more perceived identities than adults and also made more misidentification errors (placing two different identities together in the same group) than older children and adults. In Experiment 2, we used the same procedure with 32 cognitively able children with autism. Autistic children reported a similar number of identities and made a similar number of misidentification errors to a group of typical children of similar age and ability. Fine-grained analysis using matrices revealed marginal group differences in overall performance. We suggest that the immature performance of typical and autistic children could arise from problems extracting the perceptual commonalities from different images of the same person and building stable representations of facial identity.

    Health-promoting properties of olive oil and valorization of its phenolic and polyphenolic nutraceutical components.

    The experimental work of this thesis is part of a broad multidisciplinary project aimed at studying the nutraceutical properties of olive oil itself and of its phenolic and polyphenolic components, such as tyrosol, hydroxytyrosol, oleuropein, oleocanthal, and oleacein, for which no health claim yet exists. In this regard, the availability of analytical methods for assaying these phenolic and polyphenolic components is essential, as is that of pure standards that can be used for analytical studies, for biochemical-pharmacological studies, and for pharmaceutical technology studies. In particular, among the phenols and polyphenols under investigation, this thesis focused on the development of analytical methods for oleocanthal, both by HPLC and by nuclear magnetic resonance. Moreover, since oleocanthal is not commercially available, part of the experimental work was devoted to the synthesis of this compound, including the optimization of known synthetic routes.

    Audio Feedback Associated With Body Movement Enhances Audio and Somatosensory Spatial Representation

    In recent years, the positive impact of sensorimotor rehabilitation training on spatial abilities has received growing attention; for example, there is evidence that combined multimodal feedback improves responsiveness to spatial stimuli compared with unimodal feedback. To date, it remains unclear to what extent spatial learning is influenced by training conditions. Here we investigated the effects of active and passive audio-motor training on spatial perception in the auditory and proprioceptive domains in 36 healthy young adults. First, to investigate the role of voluntary movements in spatial perception, we compared the effects of active vs. passive multimodal training on auditory and proprioceptive spatial localization. Second, to investigate the effectiveness of unimodal training conditions on spatial perception, we compared the impact of proprioceptive-only and auditory-only sensory feedback on spatial localization. Finally, to understand whether the positive effects of multimodal and unimodal training generalize to the untrained side, both dominant and non-dominant arms were tested. Results indicate that passive multimodal training (guided movement) is more beneficial than active multimodal training (active exploration) and that only in the passive condition does the improvement also generalize to the untrained hand. Moreover, we found that combined audio-motor training provides the strongest benefit because it significantly affects both auditory and somatosensory localization, whereas the effect of a single feedback modality is limited to a single domain, indicating a cross-modal influence between the two domains. Therefore, the use of multimodal feedback is more efficient in improving spatial perception. These results indicate that combined sensorimotor signals are effective in recalibrating auditory and proprioceptive spatial perception and that the beneficial effect is mainly due to the combination of auditory and proprioceptive spatial cues.

    Assessing social competence in visually impaired people and proposing an interventional program in visually impaired children

    Visually impaired children and adults have difficulties engaging in positive social interactions. This study assesses social competence in sighted and visually impaired people and proposes a novel interventional strategy for visually impaired children. We designed a task that assesses the ability to initiate and sustain an interaction with the experimenter while performing free hand movements, using auditory feedback delivered from the experimenter's wrist. Kinematic data from both participant and experimenter were recorded with a motion capture system. The level of social interaction between participant and experimenter was quantified through objective measurements based on Granger causality analysis applied to the participants' arm kinematics. The interventional program followed by the visually impaired children lasted 12 weeks and consisted of a series of spatial and social games performed with a sonorous bracelet that provides auditory feedback on body actions in space. Visually impaired individuals presented a poorer communication flow with the experimenter than sighted people, indicating a less efficient social interaction. The amount of communication between the two agents improved significantly after the interventional program. Thus, a specific intervention based on substituting visual with auditory feedback of body actions can enhance social inclusion for the blind population.
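The abstract does not specify the exact Granger formulation used on the arm kinematics, so the following is only a minimal sketch of the general idea: a bivariate Granger test asks whether the past of one time series (e.g. the experimenter's wrist velocity) improves prediction of another (the participant's), by comparing autoregressive models fitted by least squares with and without the other signal's lags. The function name, lag order, and log-ratio statistic here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def granger_strength(x, y, order=5):
    """Illustrative bivariate Granger measure of the influence y -> x.

    Fits two autoregressive models for x by ordinary least squares:
      restricted: x[t] ~ x[t-1..t-order]
      full:       x[t] ~ x[t-1..t-order] + y[t-1..t-order]
    and returns log(var_restricted / var_full). Values above zero mean
    the past of y helps predict x, i.e. information flows from y to x.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    target = x[order:]
    # Lag matrices: column k holds the series delayed by k+1 samples.
    lags_x = np.column_stack([x[order - k: n - k] for k in range(1, order + 1)])
    lags_y = np.column_stack([y[order - k: n - k] for k in range(1, order + 1)])

    def residual_variance(design):
        design = np.column_stack([np.ones(len(target)), design])  # intercept
        coef, *_ = np.linalg.lstsq(design, target, rcond=None)
        return np.var(target - design @ coef)

    var_restricted = residual_variance(lags_x)
    var_full = residual_variance(np.hstack([lags_x, lags_y]))
    return np.log(var_restricted / var_full)
```

On two kinematic traces, comparing `granger_strength(participant, experimenter)` against the reverse direction gives an asymmetry that can serve as an objective index of who is driving the interaction, which is the spirit of the measurement described above.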